284 research outputs found

    Efficient deterministic approximate counting for low-degree polynomial threshold functions

    We give a deterministic algorithm for approximately counting satisfying assignments of a degree-$d$ polynomial threshold function (PTF). Given a degree-$d$ input polynomial $p(x_1,\dots,x_n)$ over $\mathbb{R}^n$ and a parameter $\epsilon > 0$, our algorithm approximates $\Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0]$ to within an additive $\pm \epsilon$ in time $O_{d,\epsilon}(1) \cdot \mathrm{poly}(n^d)$. (Any sort of efficient multiplicative approximation is impossible even for randomized algorithms assuming $NP \neq RP$.) Note that the running time of our algorithm (as a function of $n^d$, the number of coefficients of a degree-$d$ PTF) is a \emph{fixed} polynomial. The fastest previous algorithm for this problem (due to Kane), based on constructions of unconditional pseudorandom generators for degree-$d$ PTFs, runs in time $n^{O_{d,c}(1) \cdot \epsilon^{-c}}$ for all $c > 0$. The key novel contributions of this work are: (1) a new multivariate central limit theorem, proved using tools from Malliavin calculus and Stein's method, showing that any collection of Gaussian polynomials with small eigenvalues must have a joint distribution which is very close to a multidimensional Gaussian distribution; and (2) a new decomposition of low-degree multilinear polynomials over Gaussian inputs: roughly speaking, we show that (up to some small error) any such polynomial can be decomposed into a bounded number of multilinear polynomials, all of which have extremely small eigenvalues. We use these new ingredients to give a deterministic algorithm for a Gaussian-space version of the approximate counting problem, and then employ standard techniques for working with low-degree PTFs (invariance principles and regularity lemmas) to reduce the original approximate counting problem over the Boolean hypercube to the Gaussian version.
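    As a point of reference for the quantity being computed, here is a minimal randomized baseline (not the paper's deterministic algorithm): a standard Monte Carlo estimate of $\Pr_{x \sim \{-1,1\}^n}[p(x) \geq 0]$, with the sample count coming from a Hoeffding bound. The example polynomial is an arbitrary toy choice.

```python
import math
import random

def estimate_ptf_acceptance(p, n, eps, delta=0.01):
    """Monte Carlo estimate of Pr[p(x) >= 0] over uniform x in {-1,1}^n,
    accurate to +/- eps with probability >= 1 - delta (Hoeffding bound)."""
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))  # number of samples
    hits = 0
    for _ in range(m):
        x = [random.choice((-1, 1)) for _ in range(n)]
        if p(x) >= 0:
            hits += 1
    return hits / m

# Toy degree-2 PTF on n = 3 variables: sign(x1*x2 + x2*x3 + x3*x1).
# The true acceptance probability here is 2/8 = 0.25.
p = lambda x: x[0] * x[1] + x[1] * x[2] + x[2] * x[0]
print(estimate_ptf_acceptance(p, n=3, eps=0.05))
```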

    Explicit Optimal Hardness via Gaussian stability results

    The results of Raghavendra (2008) show that, assuming Khot's Unique Games Conjecture (2002), for every constraint satisfaction problem there exists a generic semi-definite program that achieves the optimal approximation factor. This result is existential, as it does not provide an explicit optimal rounding procedure, nor does it allow one to calculate exactly the Unique Games hardness of the problem. Obtaining an explicit optimal approximation scheme and the corresponding approximation factor is a difficult challenge for each specific approximation problem. An approach for determining the exact approximation factor and the corresponding optimal rounding was established in the analysis of MAX-CUT (KKMO 2004) and the use of the Invariance Principle (MOO 2005). However, this approach crucially relies on results explicitly proving optimal partitions in Gaussian space. Until recently, Borell's result (Borell 1985) was the only non-trivial Gaussian partition result known. In this paper we derive the first explicit optimal approximation algorithm and the corresponding approximation factor using a new result on Gaussian partitions due to Isaksson and Mossel (2012). This Gaussian result allows us to determine exactly the Unique Games hardness of MAX-3-EQUAL. In particular, our results show that Zwick's algorithm for this problem achieves the optimal approximation factor, and prove that the approximation achieved by the algorithm is $\approx 0.796$, as conjectured by Zwick. We further use previously known optimal Gaussian partition results to obtain a new Unique Games hardness factor for MAX-k-CSP: using the well-known fact that jointly normal pairwise independent random variables are fully independent, we show that the UGC hardness of MAX-k-CSP is $\frac{\lceil (k+1)/2 \rceil}{2^{k-1}}$, improving on results of Austrin and Mossel (2009).
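    As a quick numeric illustration of the closing claim, the sketch below evaluates the stated MAX-k-CSP hardness factor $\frac{\lceil (k+1)/2 \rceil}{2^{k-1}}$ for a few small values of $k$; the range of $k$ shown is an arbitrary illustrative choice.

```python
import math

def max_k_csp_hardness(k):
    """The Unique Games hardness factor ceil((k+1)/2) / 2^(k-1) from the abstract."""
    return math.ceil((k + 1) / 2) / 2 ** (k - 1)

for k in range(3, 8):
    print(k, max_k_csp_hardness(k))
# k=3 -> 0.5, k=4 -> 0.375, k=5 -> 0.1875, ...
```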

    Noisy population recovery in polynomial time

    In the noisy population recovery problem of Dvir et al., the goal is to learn an unknown distribution $f$ on binary strings of length $n$ from noisy samples. For some parameter $\mu \in [0,1]$, a noisy sample is generated by flipping each coordinate of a sample from $f$ independently with probability $(1-\mu)/2$. We assume an upper bound $k$ on the size of the support of the distribution, and the goal is to estimate the probability of any string to within some given error $\varepsilon$. It is known that the algorithmic complexity and sample complexity of this problem are polynomially related to each other. We show that for $\mu > 0$, the sample complexity (and hence the algorithmic complexity) is bounded by a polynomial in $k$, $n$ and $1/\varepsilon$, improving upon the previous best result of $\mathsf{poly}(k^{\log\log k}, n, 1/\varepsilon)$ due to Lovett and Zhang. Our proof combines ideas from Lovett and Zhang with a \emph{noise attenuated} version of Möbius inversion. In turn, the latter crucially uses the construction of a \emph{robust local inverse} due to Moitra and Saks.
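    A minimal sketch of the noise model described above: draw a sample from $f$, then flip each coordinate independently with probability $(1-\mu)/2$. The support and probabilities below are toy placeholders, not from the paper.

```python
import random

def noisy_sample(f_support, f_probs, mu):
    """Draw x ~ f, then flip each bit independently with probability (1 - mu)/2."""
    x = random.choices(f_support, weights=f_probs)[0]
    flip_p = (1 - mu) / 2
    return tuple(b ^ 1 if random.random() < flip_p else b for b in x)

# Toy instance: support size k = 2 on strings of length n = 4.
support = [(0, 0, 1, 1), (1, 0, 1, 0)]
probs = [0.7, 0.3]
print([noisy_sample(support, probs, mu=0.4) for _ in range(5)])
```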

    Majority is Stablest: Discrete and SoS

    The Majority is Stablest Theorem has numerous applications in hardness of approximation and social choice theory. We give a new proof of the Majority is Stablest Theorem by induction on the dimension of the discrete cube. Unlike the previous proof, it uses neither the "invariance principle" nor Borell's result in Gaussian space. The new proof is general enough to include all previous variants of Majority is Stablest, such as "it ain't over until it's over" and "Majority is most predictable". Moreover, the new proof allows us to derive a proof of Majority is Stablest in a constant level of the Sum of Squares hierarchy. This implies in particular that the Khot-Vishnoi instance of Max-Cut does not provide a gap instance for the Lasserre hierarchy.
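    To make the quantity at stake concrete, the sketch below empirically estimates the noise stability $\mathbb{E}[\mathrm{Maj}_n(x)\,\mathrm{Maj}_n(y)]$ of majority under $\rho$-correlated inputs and compares it with the limiting value $1 - \frac{2}{\pi}\arccos(\rho)$ given by Sheppard's formula; the choices of $n$, $\rho$, and the trial count are arbitrary.

```python
import math
import random

def maj(x):
    return 1 if sum(x) > 0 else -1

def estimate_stability(n, rho, trials=20000):
    """Estimate E[Maj(x) * Maj(y)] for rho-correlated uniform x, y in {-1,1}^n."""
    agree_p = (1 + rho) / 2  # P[y_i = x_i], giving E[x_i * y_i] = rho
    total = 0
    for _ in range(trials):
        x = [random.choice((-1, 1)) for _ in range(n)]
        y = [xi if random.random() < agree_p else -xi for xi in x]
        total += maj(x) * maj(y)
    return total / trials

rho = 0.5
print(estimate_stability(n=101, rho=rho))    # empirical stability for n = 101
print(1 - (2 / math.pi) * math.acos(rho))    # Sheppard limit: 1/3 for rho = 0.5
```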

    Non-interactive simulation of correlated distributions is decidable

    A basic problem in information theory is the following: let $\mathbf{P} = (\mathbf{X}, \mathbf{Y})$ be an arbitrary distribution where the marginals $\mathbf{X}$ and $\mathbf{Y}$ are (potentially) correlated. Let Alice and Bob be two players, where Alice gets samples $\{x_i\}_{i \ge 1}$ and Bob gets samples $\{y_i\}_{i \ge 1}$, and for all $i$, $(x_i, y_i) \sim \mathbf{P}$. What joint distributions $\mathbf{Q}$ can be simulated by Alice and Bob without any interaction? Classical works in information theory by Gács-Körner and Wyner answer this question when at least one of $\mathbf{P}$ or $\mathbf{Q}$ is the distribution on $\{0,1\} \times \{0,1\}$ where each marginal is unbiased and identical. However, other than this special case, the answer to this question is understood in very few cases. Recently, Ghazi, Kamath and Sudan showed that this problem is decidable for $\mathbf{Q}$ supported on $\{0,1\} \times \{0,1\}$. We extend their result to $\mathbf{Q}$ supported on any finite alphabet. We rely on recent results in Gaussian geometry (by the authors) as well as a new \emph{smoothing argument} inspired by the method of \emph{boosting} from learning theory and potential function arguments from complexity theory and additive combinatorics.
    Comment: The reduction for non-interactive simulation for a general source distribution to the Gaussian case was incorrect in the previous version. It has been rectified now.
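    A toy illustration of the setup (not the paper's decision procedure): Alice sees $\{x_i\}$, Bob sees $\{y_i\}$, with each pair drawn i.i.d. from a correlated source $\mathbf{P}$, and each party applies a local function to its own samples with no communication. The doubly symmetric binary source and the majority maps below are placeholder choices for illustration.

```python
import random

def sample_source(rho, m):
    """m i.i.d. pairs from P: x uniform in {-1,1}, y agrees with x w.p. (1+rho)/2."""
    xs, ys = [], []
    for _ in range(m):
        x = random.choice((-1, 1))
        y = x if random.random() < (1 + rho) / 2 else -x
        xs.append(x)
        ys.append(y)
    return xs, ys

def majority(bits):
    return 1 if sum(bits) > 0 else -1

# Alice and Bob each apply majority to their own half, with no communication;
# the joint distribution of (a, b) is one example of a simulable target Q.
xs, ys = sample_source(rho=0.6, m=101)
a, b = majority(xs), majority(ys)
print(a, b)
```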